
    Deterministic and Stochastic Prisoner's Dilemma Games: Experiments in Interdependent Security

    This paper examines experiments on interdependent security prisoner's dilemma games with repeated play. Utilizing a Bayesian hierarchical model, we examine how subjects make investment decisions as a function of their previous experience and their treatment condition. Our main findings are that individuals have differing underlying propensities to invest, that these propensities vary over time, and that they are affected both by the stochastic nature of the game and, even more so, by an individual's ability to learn about his or her counterpart's choices. Implications for individual decisions and the likely play of a person's counterpart are discussed in detail.
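    To make the hierarchical setup concrete, the minimal sketch below simulates binary invest/don't-invest decisions with subject-specific baseline propensities and an effect of the counterpart's observed previous choice. All names and parameter values are illustrative assumptions, not the paper's specification.

        import numpy as np

        rng = np.random.default_rng(0)
        n_subjects, n_rounds = 50, 20

        # Subject-specific baseline propensity to invest (hierarchical intercepts).
        alpha = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
        beta_counterpart = 0.8   # assumed effect of seeing the counterpart invest last round

        invest = np.zeros((n_subjects, n_rounds), dtype=int)
        counterpart_prev = rng.integers(0, 2, size=n_subjects)  # counterpart's choice before round 1

        for t in range(n_rounds):
            logit = alpha + beta_counterpart * counterpart_prev
            p = 1.0 / (1.0 + np.exp(-logit))
            invest[:, t] = rng.binomial(1, p)
            # In a repeated game the counterpart's current choice would feed the next round;
            # here we simply resample it to keep the sketch self-contained.
            counterpart_prev = rng.integers(0, 2, size=n_subjects)

        print("observed investment rate by subject (first 5):", invest.mean(axis=1)[:5])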

    Customer Acquisition via Display Advertising Using Multi-Armed Bandit Experiments

    Firms using online advertising regularly run experiments with multiple versions of their ads since they are uncertain about which ones are most effective. During a campaign, firms try to adapt to intermediate results of their tests, optimizing what they earn while learning about their ads. Yet how should they decide what percentage of impressions to allocate to each ad? This paper answers that question, resolving the well-known “learn-and-earn” trade-off using multi-armed bandit (MAB) methods. The online advertiser’s MAB problem, however, contains particular challenges, such as a hierarchical structure (ads within a website), attributes of actions (creative elements of an ad), and batched decisions (millions of impressions at a time), that are not fully accommodated by existing MAB methods. Our approach captures how the impact of observable ad attributes on ad effectiveness differs by website in unobserved ways, and our policy generates allocations of impressions that can be used in practice. We implemented this policy in a live field experiment delivering over 750 million ad impressions in an online display campaign with a large retail bank. Over the course of two months, our policy achieved an 8% improvement in the customer acquisition rate, relative to a control policy, without any additional costs to the bank. Beyond the actual experiment, we performed counterfactual simulations to evaluate a range of alternative model specifications and allocation rules in MAB policies. Finally, we show that customer acquisition would decrease by about 10% if the firm were to optimize click-through rates instead of conversions directly, a finding that has implications for understanding the marketing funnel.
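    The learn-and-earn allocation can be illustrated with a minimal batched Thompson-sampling sketch: each ad version's conversion rate gets a Beta posterior, and each batch of impressions is split in proportion to the posterior probability that each ad is best. This is a generic MAB heuristic under assumed conversion rates and batch sizes, not the authors' hierarchical, attribute-based policy.

        import numpy as np

        rng = np.random.default_rng(1)
        true_rates = np.array([0.010, 0.012, 0.015])   # unknown conversion rates (illustrative)
        alpha = np.ones(3)                             # Beta posterior: successes + 1
        beta = np.ones(3)                              # Beta posterior: failures + 1
        batch_size = 100_000                           # impressions decided at once (batched decisions)

        for _ in range(10):
            # Thompson sampling: allocate the batch in proportion to the probability
            # each ad is best, approximated by Monte Carlo draws from the posteriors.
            draws = rng.beta(alpha, beta, size=(1000, 3))
            share = np.bincount(draws.argmax(axis=1), minlength=3) / 1000
            impressions = rng.multinomial(batch_size, share)
            conversions = rng.binomial(impressions, true_rates)
            alpha += conversions
            beta += impressions - conversions

        print("final allocation shares:", share.round(3))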

    Model Selection Using Database Characteristics: Developing a Classification Tree for Longitudinal Incidence Data

    When managers and researchers encounter a data set, they typically ask two key questions: (1) Which model (from a candidate set) should I use? And (2) if I use a particular model, when is it likely to work well for my business goal? This research addresses those two questions and provides a rule, i.e., a decision tree, for data analysts to anticipate the “winning model” before having to fit any of them for longitudinal incidence data. We characterize data sets based on managerially relevant (and easy-to-compute) summary statistics, and we use classification techniques from machine learning to provide a decision tree that recommends when to use which model. By doing the “legwork” of obtaining this decision tree for model selection, we provide a time-saving tool to analysts. We illustrate this method for a common marketing problem (i.e., forecasting repeat purchasing incidence for a cohort of new customers) and demonstrate the method's ability to discriminate among an integrated family of a hidden Markov model (HMM) and its constrained variants. We observe a strong ability for data set characteristics to guide the choice of the most appropriate model, and we observe that some model features (e.g., the “back-and-forth” migration between latent states) are more important to accommodate than are others (e.g., the inclusion of an “off” state with no activity). We also demonstrate the method's broad potential by providing a general “recipe” for researchers to replicate this kind of model classification task in other managerial contexts (outside of repeat purchasing incidence data and the HMM framework).
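    The basic recipe (summarize each data set with a few cheap statistics, then learn a tree that maps those statistics to the winning model) can be sketched as follows. The features, labels, and labeling rule here are made up purely so the example runs; in practice the labels come from actually fitting and validating the candidate models.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(2)
        n_datasets = 200

        # Easy-to-compute summary statistics for each (hypothetical) data set.
        X = np.column_stack([
            rng.uniform(0.05, 0.95, n_datasets),   # share of customers with any repeat purchase
            rng.uniform(0.0, 3.0, n_datasets),     # mean purchases per customer
            rng.uniform(0.0, 1.0, n_datasets),     # a simple concentration summary
        ])

        # Label = which candidate model won on holdout (generated by an arbitrary rule here;
        # in practice it is determined by fitting all candidate models to each data set).
        y = np.where(X[:, 1] > 1.5, "full_hmm",
                     np.where(X[:, 0] < 0.3, "off_state_variant", "simple_model"))

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=["repeat_share", "mean_purchases", "concentration"]))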

    What's a Testlet and Why Do We Need Them?

    In 1987, Wainer and Kiely proposed a name for a packet of test items that are administered together; they called such an aggregation a testlet. Testlets had been in existence for a long time prior to 1987, albeit without this euphonious appellation. They had typically been used to boost testing efficiency in situations that examined an individual's ability to understand some sort of stimulus, for example, a reading passage, an information graph, a musical passage, or a table of numbers. In such situations, a substantial amount of examinee time is spent in processing the stimulus, and it was found to be wasteful of that effort to ask just one question about it. Consequently, large stimuli were typically paired with a set of questions. Experience helped to guide the number of questions that were used to form the testlet. It is easy to understand that if, for example, we were to ask some questions about a 250-word reading passage, we would find that as we wrote questions, it would get increasingly difficult to ask about something new. Thus, we would find that eventually the law of diminishing returns would set in and a new question would not be generating enough independent information about the examinee's ability to justify asking it. In more technical language, we might say that the within-testlet dependence among items limits the information that is available from that 250-word passage.
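    The diminishing-returns point can be made concrete with a simple variance calculation: if all items in a testlet share a common passage effect, that shared component never averages away, so the precision of the testlet score plateaus as items are added. The sketch below uses an additive normal approximation with assumed variance components, not an item response theory model.

        import numpy as np

        sigma_testlet = 0.5   # sd of the shared passage ("testlet") effect
        sigma_item = 1.0      # sd of item-specific noise

        # Variance of the mean of k items from the SAME testlet as an estimate of ability:
        # the shared effect never averages away, so added items yield less and less.
        for k in (1, 2, 5, 10, 20):
            var_mean = sigma_testlet**2 + sigma_item**2 / k
            print(f"{k:2d} items -> variance of testlet mean score: {var_mean:.3f}")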

    A Cross-Cohort Changepoint Model for Customer-Base Analysis

    We introduce a new methodology that can capture and explain differences across a series of cohorts of new customers in a repeat-transaction setting. More specifically, this new framework, which we call a vector changepoint model, exploits the underlying regime structure in a sequence of acquired customer cohorts to make predictive statements about new cohorts for which the firm has little or no longitudinal transaction data. To accomplish this, we develop our model within a hierarchical Bayesian framework to uncover evidence of (latent) regime changes for each cohort-level parameter separately, while disentangling cross-cohort changes from calendar-time changes. Calibrating the model using multicohort donation data from a nonprofit organization, we find that holdout predictions for new cohorts using this model have greater accuracy—and greater diagnostic value—compared to a variety of strong benchmarks. Our modeling approach also highlights the perils of pooling data across cohorts without accounting for cross-cohort shifts, thus enabling managers to quantify their uncertainty about potential regime changes and avoid “old data” aggregation bias.
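    A stripped-down version of the regime-change idea, for a single cohort-level parameter: place a uniform prior over the location of one changepoint in a sequence of cohort means and compute its posterior, plugging in segment means for simplicity. The paper's model is fully hierarchical and handles multiple parameters and calendar-time effects; this sketch, with made-up cohort values, shows only the changepoint skeleton.

        import numpy as np

        rng = np.random.default_rng(3)
        # Simulated cohort-level parameter (e.g., a baseline donation propensity) for 12 cohorts,
        # with a regime shift after cohort 7; values are illustrative.
        theta = np.concatenate([rng.normal(0.30, 0.02, 7), rng.normal(0.22, 0.02, 5)])
        sigma = 0.02

        def loglik(x, mu):
            return -0.5 * np.sum((x - mu) ** 2) / sigma**2

        # Uniform prior over the changepoint location tau (regime 1 = cohorts before tau).
        log_post = []
        for tau in range(1, len(theta)):
            left, right = theta[:tau], theta[tau:]
            log_post.append(loglik(left, left.mean()) + loglik(right, right.mean()))
        log_post = np.array(log_post)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        print("posterior over changepoint location:", post.round(3))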

    Bayesian Estimation of Random-Coefficients Choice Models Using Aggregate Data

    This article discusses the use of Bayesian methods for estimating logit demand models using aggregate data, i.e., information solely on how many consumers chose each product. We analyze two different demand systems: independent samples and consumer panel. Under the first system, there is a different and independent random sample of N consumers in each period and each consumer makes only a single purchase decision. Under the second system, the same N consumers make a purchase decision in each of T periods. The proposed methods are illustrated using simulated and real data, and managerial insights available via data augmentation are discussed in detail.
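    For the independent-samples system, the likelihood of the aggregate data is multinomial with logit choice probabilities, and even a plain random-walk Metropolis sampler can recover homogeneous coefficients; the random-coefficients and data-augmentation machinery discussed in the article is omitted here. Attributes, sample sizes, and the true coefficients below are simulated for illustration only.

        import numpy as np

        rng = np.random.default_rng(4)
        J, T, N = 3, 20, 500                      # products, periods, consumers per period
        X = rng.normal(size=(T, J, 2))            # product attributes (e.g., price, promotion), illustrative
        beta_true = np.array([-1.0, 0.5])

        def shares(beta):
            u = X @ beta                          # T x J utilities
            eu = np.exp(u - u.max(axis=1, keepdims=True))
            return eu / eu.sum(axis=1, keepdims=True)

        counts = np.array([rng.multinomial(N, p) for p in shares(beta_true)])  # aggregate data

        def loglik(beta):
            return np.sum(counts * np.log(shares(beta)))

        # Random-walk Metropolis on beta with a flat prior.
        beta, draws = np.zeros(2), []
        for it in range(5000):
            prop = beta + rng.normal(scale=0.05, size=2)
            if np.log(rng.uniform()) < loglik(prop) - loglik(beta):
                beta = prop
            draws.append(beta)
        print("posterior mean of beta:", np.mean(draws[1000:], axis=0).round(2))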

    New Measures of Clumpiness for Incidence Data

    In recent years, growing attention has been paid to the increasing pattern of ‘clumpy data’ in many empirical areas, such as financial market microstructure, criminology and seismology, and digital media consumption, to name just a few; yet a well-defined and careful measurement of clumpiness has remained somewhat elusive. The related ‘hot hand’ effect has long been a widespread belief in sports and has triggered a branch of interesting research that could shed some light on this domain. However, since many concerns have been raised about the low power of the existing ‘hot hand’ significance tests, we propose a new class of clumpiness measures which are shown to have higher statistical power in extensive simulations under a wide variety of statistical models for repeated outcomes. Finally, an empirical study is provided using a unique dataset obtained from Hulu.com, an increasingly popular video streaming provider. Our results provide evidence that the ‘clumpiness’ phenomenon is widely prevalent in digital content consumption, which supports the widely held belief in the ‘bingeability’ of online content.
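    As an illustration of what such a measure can look like, the function below computes an entropy-style statistic on normalized inter-event gaps: roughly 0 when events are evenly spread over the observation window and closer to 1 when they bunch together. It is meant only to convey the flavor of this class of measures; the exact definitions and properties studied in the paper may differ.

        import numpy as np

        def clumpiness(event_times, horizon):
            """Entropy-style clumpiness on normalized inter-event gaps (illustrative measure)."""
            t = np.sort(np.asarray(event_times, dtype=float))
            gaps = np.diff(np.concatenate([[0.0], t, [horizon]]))   # include both boundaries
            x = gaps / horizon
            n = len(t)
            # Near 0 when gaps are all equal (evenly spread), closer to 1 when events are clumped.
            return 1.0 + np.sum(x * np.log(np.where(x > 0, x, 1.0))) / np.log(n + 1)

        evenly_spread = np.arange(1, 11) * 9.0          # 10 events spread over ~100 time units
        clumped = np.array([1, 2, 3, 4, 5, 91, 92, 93, 94, 95], dtype=float)
        print(clumpiness(evenly_spread, 100.0), clumpiness(clumped, 100.0))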

    A Bayesian approach for predicting the popularity of tweets

    We predict the popularity of short messages called tweets created in the micro-blogging site known as Twitter. We measure the popularity of a tweet by the time-series path of its retweets, which occur when people forward the tweet to others. We develop a probabilistic model for the evolution of the retweets using a Bayesian approach, and form predictions using only observations on the retweet times and the local network or "graph" structure of the retweeters. We obtain good step-ahead forecasts and predictions of the final total number of retweets even when only a small fraction (i.e., less than one tenth) of the retweet path is observed. This translates to good predictions within a few minutes of a tweet being posted, and has potential implications for understanding the spread of broader ideas, memes, or trends in social networks. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics; DOI: http://dx.doi.org/10.1214/14-AOAS741.
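    A toy version of the forecasting task uses a conjugate Gamma-Poisson update of an unknown retweet rate from the first few observed minutes of the path. The paper's model additionally exploits the retweet times and the retweeters' graph structure; all rates and horizons below are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)
        true_rate = 3.0                       # retweets per minute (illustrative)
        horizon = 60                          # forecast window in minutes
        observed_minutes = 5                  # only the first few minutes of the path are seen
        early_counts = rng.poisson(true_rate, observed_minutes)

        # Gamma(a0, b0) prior on the retweet rate, conjugate to the Poisson counts.
        a0, b0 = 2.0, 1.0
        a_post = a0 + early_counts.sum()
        b_post = b0 + observed_minutes

        # Posterior predictive for the total over the full horizon (Monte Carlo).
        rate_draws = rng.gamma(a_post, 1.0 / b_post, size=10_000)
        total_draws = early_counts.sum() + rng.poisson(rate_draws * (horizon - observed_minutes))
        print("predicted total retweets (mean, 90% interval):",
              total_draws.mean().round(1), np.quantile(total_draws, [0.05, 0.95]))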

    Measuring Multi-Channel Advertising Effectiveness Using Consumer-Level Advertising Response Data

    Advances in data collection have made it increasingly easy to collect information on advertising exposures. However, translating this seemingly rich data into measures of advertising response has proven difficult, largely because of concerns that advertisers target customers with a higher propensity to buy or increase advertising during periods of peak demand. We show how this problem can be addressed by studying a setting where a firm randomly held out customers from each campaign, creating a sequence of randomized field experiments that mitigates (many) potential endogeneity problems. Exploratory analysis of individual holdout experiments shows positive effects for both email and catalog; however, the estimated effect for any individual campaign is imprecise, because of the small size of the holdout. To pool data across campaigns, we develop a hierarchical Bayesian model for advertising response that allows us to account for individual differences in purchase propensity and marketing response. Building on the traditional ad-stock framework, we are able to estimate separate decay rates for each advertising medium, allowing us to predict channel-specific short- and long-term effects of advertising and use these predictions to inform marketing strategy. We find that catalogs have a substantially longer-lasting impact on customer purchasing than emails. We show how the model can be used to score and target individual customers based on their advertising responsiveness, and we find that targeting the most responsive customers increases the predicted returns on advertising by approximately 70% relative to traditional recency, frequency, and monetary value-based targeting.
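    The channel-specific ad-stock idea takes only a few lines: each channel's stock is a geometrically decaying sum of its own past exposures, so a slowly decaying channel (e.g., catalog) keeps contributing long after the exposure while a fast-decaying one (e.g., email) fades quickly. The decay rates and exposure series below are illustrative, not estimates from the paper.

        import numpy as np

        def adstock(exposures, decay):
            """Geometric ad-stock: stock_t = exposure_t + decay * stock_{t-1}."""
            stock = np.zeros_like(exposures, dtype=float)
            for t, x in enumerate(exposures):
                stock[t] = x + (decay * stock[t - 1] if t > 0 else 0.0)
            return stock

        weeks = 12
        catalog = np.zeros(weeks); catalog[0] = 1.0     # one catalog mailed in week 0
        email = np.zeros(weeks);   email[0] = 1.0       # one email sent in week 0

        # Illustrative decay rates: catalogs assumed to decay more slowly than emails.
        print("catalog stock:", adstock(catalog, decay=0.8).round(2))
        print("email stock:  ", adstock(email, decay=0.3).round(2))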